Sparse representation based hyperspectral image compression and classification
Abstract
This thesis presents research on applying sparse representation to lossy hyperspectral image
compression and hyperspectral image classification. The proposed lossy hyperspectral image
compression framework introduces two types of dictionaries distinguished by the terms sparse
representation spectral dictionary (SRSD) and multi-scale spectral dictionary (MSSD), respectively.
The former is learnt in the spectral domain to exploit the spectral correlations, and the
latter in the wavelet multi-scale domain to exploit both spatial and spectral correlations in
hyperspectral images. To alleviate the computational demand of dictionary learning, either a
base dictionary trained offline or an update of the base dictionary is employed in the compression
framework. The proposed compression method is evaluated in terms of different objective
metrics, and compared to selected state-of-the-art hyperspectral image compression schemes, including
JPEG 2000. The numerical results demonstrate the effectiveness and competitiveness of
both SRSD and MSSD approaches.
For the proposed hyperspectral image classification method, we utilize the sparse coefficients
for training support vector machine (SVM) and k-nearest neighbour (kNN) classifiers. In particular,
the discriminative character of the sparse coefficients is enhanced by incorporating contextual
information using local mean filters. The classification performance is evaluated and compared
to a number of similar or representative methods. The results show that our approach outperforms
comparable approaches based on SVM or sparse representation.
This thesis makes the following contributions. It provides a relatively thorough investigation
of applying sparse representation to lossy hyperspectral image compression. Specifically,
it reveals the effectiveness of sparse representation for the exploitation of spectral correlations
in hyperspectral images. In addition, we have shown that the discriminative character of sparse
coefficients can lead to superior performance in hyperspectral image classification.
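The core operation this abstract relies on, representing each pixel spectrum as a sparse combination of dictionary atoms, can be sketched as follows. This is a minimal illustration using a random dictionary and a standard orthogonal matching pursuit (OMP) solver, not the thesis's SRSD/MSSD pipeline; all names and sizes here are hypothetical.

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Greedy orthogonal matching pursuit: approximate x as D @ a
    using at most n_nonzero dictionary atoms (columns of D)."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # re-fit a least-squares solution on the chosen atoms
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(0)
n_bands, n_atoms = 64, 128            # spectral bands, dictionary size
D = rng.standard_normal((n_bands, n_atoms))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms
x = D[:, [3, 40]] @ np.array([1.5, -0.7])   # a synthetic 2-sparse spectrum
a = omp(D, x, n_nonzero=2)
print(sorted(np.nonzero(a)[0]))       # the two generating atoms should be recovered
```

In a compression setting, only the few non-zero coefficients and their indices would be encoded per pixel, which is where the rate savings come from.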
On the stability in terms of two measures for perturbed impulsive integro-differential equations
Abstract
This paper establishes several stability criteria for perturbed impulsive integro-differential equations with fixed moments of impulsive effect. Using a new comparison theorem that connects the solutions of the perturbed system with those of the unperturbed one, sufficient conditions for stability in terms of two measures are obtained for the perturbed system even when the unperturbed system does not satisfy them, owing to the effect of the perturbation terms.
Translated Skip Connections -- Expanding the Receptive Fields of Fully Convolutional Neural Networks
The effective receptive field of a fully convolutional neural network is an
important consideration when designing an architecture, as it defines the
portion of the input visible to each convolutional kernel. We propose a neural
network module, extending traditional skip connections, called the translated
skip connection. Translated skip connections geometrically increase the
receptive field of an architecture with negligible impact on both the size of
the parameter space and computational complexity. By embedding translated skip
connections into a benchmark architecture, we demonstrate that our module
matches or outperforms four other approaches to expanding the effective
receptive fields of fully convolutional neural networks. We confirm this result
across five contemporary image segmentation datasets from disparate domains,
including the detection of COVID-19 infection, segmentation of aerial imagery,
common object segmentation, and segmentation for self-driving cars.
Comment: 5 pages, 2 figures, 1 table, published at the 2022 IEEE International
Conference on Image Processing
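One way to read "translated skip connection" is a skip path whose feature map is spatially translated before being merged, so downstream kernels see activations from distant input positions at no parameter cost. The sketch below is an illustrative interpretation of that idea using circular shifts, not necessarily the paper's exact module; the function name and offsets are hypothetical.

```python
import numpy as np

def translated_skip(feat, offsets):
    """Concatenate spatially translated copies of a feature map along the
    channel axis. Each translation exposes activations from shifted
    positions, enlarging the effective receptive field of subsequent
    convolutions without adding learnable parameters.
    feat: (C, H, W) array; offsets: list of (dy, dx) shifts."""
    copies = [feat] + [np.roll(feat, (dy, dx), axis=(1, 2))
                       for dy, dx in offsets]
    return np.concatenate(copies, axis=0)

feat = np.zeros((1, 8, 8))
feat[0, 4, 4] = 1.0                        # a single active location
out = translated_skip(feat, offsets=[(0, 4), (4, 0)])
print(out.shape)                           # (3, 8, 8)
```

A 3x3 kernel applied to `out` now reacts to activity four pixels away in each translated copy, which is the "geometric increase in receptive field" the abstract refers to.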
Federated Learning with Classifier Shift for Class Imbalance
Federated learning aims to learn a global model collaboratively while the
training data belongs to different clients and is not allowed to be exchanged.
However, statistical heterogeneity across clients' non-IID data, such as class
imbalance in classification, causes client drift and significantly reduces
the performance of the global model. This paper proposes a simple and effective
approach named FedShift, which adds a shift to the classifier output during
the local training phase to alleviate the negative impact of class imbalance.
We theoretically prove that the classifier shift in FedShift can make the local
optimum consistent with the global optimum and ensure the convergence of the
algorithm. Moreover, our experiments indicate that FedShift significantly
outperforms the other state-of-the-art federated learning approaches on various
datasets in terms of both accuracy and communication efficiency.
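The abstract does not give FedShift's exact shift, but a common form of classifier-output correction for class imbalance is a per-class adjustment derived from the local label distribution. The sketch below uses a log-prior shift as an assumed, illustrative choice; FedShift's actual formula may differ.

```python
import numpy as np

def shifted_logits(logits, local_class_counts, smoothing=1e-6):
    """Illustrative classifier-output shift for class imbalance:
    subtract the log of the local class prior from each logit so that
    classes rare on this client are not drowned out. This is a sketch
    of the general idea, not necessarily FedShift's exact shift."""
    prior = local_class_counts / local_class_counts.sum()
    return logits - np.log(prior + smoothing)

logits = np.array([2.0, 2.0, 2.0])       # model is indifferent between classes
counts = np.array([900.0, 90.0, 10.0])   # heavily imbalanced local data
adj = shifted_logits(logits, counts)
probs = np.exp(adj) / np.exp(adj).sum()
print(probs.argmax())                    # → 2, the rarest class gets the boost
```

Because the shift depends only on each client's own label counts, it adds no communication overhead, which is consistent with the efficiency claim above.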